A Model for Belief Revision

Authors

  • João P. Martins
  • Stuart C. Shapiro
Abstract

It is generally recognized that the possibility of detecting contradictions and identifying their sources is an important feature of an intelligent system. Systems that are able to detect contradictions, identify their causes, or readjust their knowledge bases to remove the contradiction, called Belief Revision Systems, Truth Maintenance Systems, or Reason Maintenance Systems, have been studied by several researchers in Artificial Intelligence (AI). In this paper, we present a logic suitable for supporting belief revision systems, discuss the properties that a belief revision system based on this logic will exhibit, and present a particular implementation of our model of a belief revision system. The system we present differs from most of the systems developed so far in three respects: First, it is based on a logic that was developed to support belief revision systems. Second, it uses the rules of inference of the logic to automatically compute the dependencies among propositions, rather than forcing the user to do this, as in many existing systems. Third, it was the first belief revision system whose implementation relies on the manipulation of sets of assumptions, not justifications.

Artificial Intelligence 35 (1988) 25-79. © 1988, Elsevier Science Publishers B.V. (North-Holland)

I. Issues in Belief Revision

1.1. Introduction

Most computer programs constructed by researchers in AI maintain a model of their environment (external and/or internal), which is updated to reflect the perceived changes in the environment. This model is typically stored in a knowledge base, and the program draws inferences from the information in the knowledge base. All the inferences drawn are added to the knowledge base. One reason for model updating (and thus knowledge base updating) is the detection of contradictory information. In this case, the updating should be preceded by a decision about which proposition is the culprit for the contradiction, its removal from the knowledge base,¹ and the subsequent removal of every proposition that depends on the selected culprit.

The conventional approach to handling contradictions consists of blaming the contradiction on the most recent decision made (chronological backtracking). An alternative solution, dependency-directed backtracking, consists of changing, not the last choice made, but a choice that caused the unexpected condition to occur. This second approach, proposed by Stallman and Sussman [53], started a great deal of research in one area of AI, which became loosely called belief revision.²

Belief revision systems [8, 14, 26] are AI programs that deal with contradictions. They work with a knowledge base, performing reasoning from the propositions in the knowledge base, "filtering" those propositions so that only part of the knowledge base is perceived, namely, the propositions that are under consideration, called the set of believed propositions. When the belief revision system considers another one of these sets, we say that it changes its beliefs. Belief revision is an area of considerable interest, being both the subject of theoretical studies (e.g., [9, 18, 58]) and practical implementations (e.g., [4, 39, 41]). Typically, a belief revision system explores alternatives, makes choices, explores the consequences of its choices, and compares the results obtained when using different choices. If, during this process, a contradiction is detected, the belief revision system will revise the knowledge base, changing its beliefs in order to get rid of the contradiction.
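The contrast between the two backtracking strategies can be sketched in a few lines. This is an illustrative toy with invented names, not Stallman and Sussman's actual system: each derived fact records which choices it depends on, so a contradiction can be blamed on a choice that actually contributed to it rather than on the most recent one.

```python
# Toy illustration of chronological vs. dependency-directed backtracking.
# All names ("c1", "fact_x", etc.) are invented for this sketch.

choices = ["c1", "c2", "c3"]            # choices made in chronological order
depends_on = {
    "fact_x": {"c1"},                   # fact_x was derived from choice c1
    "contradiction": {"c1", "c2"},      # the conflict involves c1 and c2 only
}

# Chronological backtracking blindly retracts the most recent choice:
chronological_culprit = choices[-1]     # "c3", which played no part at all

# Dependency-directed backtracking retracts a choice that actually
# contributed to the contradiction:
dd_culprits = depends_on["contradiction"]
print(chronological_culprit, sorted(dd_culprits))  # c3 ['c1', 'c2']
```

The recorded dependency sets are what make the second strategy possible: without them, the revisor has no way to know which choices are implicated.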
There are several problems that researchers in belief revision have to address: the inference problem, which studies how new beliefs follow from old ones; the nonmonotonicity problem, which studies methods of recording that one belief depends on the absence of another; dependency recording, which concerns methods for recording that one belief depends on another; disbelief propagation, which concerns how to disbelieve all the consequences of a proposition that is disbelieved; and, finally, the revision of beliefs, which studies how to change beliefs in order to get rid of a contradiction. No single system or researcher has addressed all these problems. In the remainder of this section, we will take a look at some of the issues involved in each one of these areas and discuss how they have been addressed by the following researchers: Doyle [11-13], McAllester [31-33], McDermott [34, 35], and de Kleer [5-7, 10]. Most of the work on belief revision has been influenced by these researchers: Doyle, who started the area, led to studies of algorithm improvement by Goodwin [16-18] and by Petrie [42] and to several applications, for example, [51, 55]; McAllester's system was used in several applications, for example, [20, 43]; McDermott's work led to the development of the first commercial system with belief revision, DUCK [52]; de Kleer's work, which we believe was highly influenced by the work described in this paper (see also [25, 28-30]), was the starting point for the implementation of the KEE worlds [39, 40].

¹ Or making it inaccessible to the program.
² The field of belief revision in AI is usually recognized to have been initiated by the work of Jon Doyle [11, 12], although a system that performs belief revision (in robot planning) was developed simultaneously by Philip London [23].
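The assumption-set idea mentioned in the abstract can be sketched as follows. This is a minimal illustration under our own naming, not the authors' actual logic or implementation: each belief carries the set of hypotheses it was derived from, an inference rule unions the sets of its premises (so dependencies are computed by the rules of inference, not supplied by the user), and a contradiction is blamed on the union of the origin sets of the conflicting beliefs.

```python
# Sketch of dependency recording via sets of assumptions ("origin sets").
# Names and representation are invented for this illustration.

class Belief:
    def __init__(self, statement, origins):
        self.statement = statement
        self.origins = frozenset(origins)  # hypotheses this belief rests on

def modus_ponens(implication, antecedent):
    """Derive q from (p -> q) and p, unioning the premises' origin sets."""
    p, q = implication.statement           # statement is a pair (p, q)
    assert antecedent.statement == p
    return Belief(q, implication.origins | antecedent.origins)

# Hypotheses are their own sole origin.
h1 = Belief(("A", "B"), {"h1"})   # h1: A -> B
h2 = Belief("A", {"h2"})          # h2: A
b = modus_ponens(h1, h2)          # derives B, depending on {h1, h2}
print(b.statement, sorted(b.origins))   # B ['h1', 'h2']

# If B contradicts some belief C whose origin set is {h3}, the culprit
# hypothesis must lie in the union of the two origin sets:
culprits = b.origins | frozenset({"h3"})
print(sorted(culprits))  # ['h1', 'h2', 'h3']
```

Because every derived belief carries its origin set, retracting a hypothesis lets the system identify exactly which beliefs no longer have support, without any user-supplied justifications.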


Similar Resources

Cognitive Style and Auditor's judgment: Does Cognitive Style mitigate the Impact of recency bias on the auditors’ belief revision process?

Abstract: The purpose of the present study is twofold: first, to provide more evidence of the effects of information order on auditors' beliefs, and second, to examine whether auditors with different cognitive styles revise their beliefs differently. These goals were achieved through the use of experienced professional auditors. To determine the effects of information sequence, Hogarth and Einhorn’s belie...


Mutual Belief Revision: Semantics and Computation

This paper presents both a semantic and a computational model for multi-agent belief revision. We show that these two models are equivalent but serve different purposes. The semantic model displays the intuition and construction of the belief revision operation in multi-agent environments, especially in case of just two agents. The logical properties of this model provide strong justifications ...


Towards a "Sophisticated" Model of Belief Dynamics. Part II: Belief Revision

In the companion paper (Towards a “sophisticated” model of belief dynamics. Part I), a general framework for realistic modelling of instantaneous states of belief and of the operations involving them was presented and motivated. In this paper, the framework is applied to the case of belief revision. A model of belief revision shall be obtained which, firstly, recovers the Gärdenfors postulates ...


Belief Revision for Cognitive Agents: Individual Variation in Multi-Agent Systems

This paper presents a formal framework for modeling belief revision in cognitive agents, to integrate realistic epistemic dynamics in agent-based social simulation. Special emphasis is given to individual variation in MAS: agents are allowed to change their beliefs according to different cognitive styles, and such individual distinctions are captured and specified by the model. Although the fra...


UEcho: A Model of Uncertainty Management in Human Abductive Reasoning

This paper explores the uncertainty aspects of human abductive reasoning. Echo, a model of abduction based on the Theory of Explanatory Coherence (Thagard, 1992a), captures many aspects of human abductive reasoning, but fails to sufficiently manage the uncertainty in abduction. In particular, Echo does not handle belief acquisition and dynamic belief revision, two essential components of human ...


Belief Revision as Applied Within a Descriptive Model of Jury Deliberations

Belief revision is a well-researched topic within AI. We argue that the new model of belief revision discussed here is suitable for general modelling of judicial decision making, alongside extant approaches known from jury research. The new approach to belief revision is of general interest whenever attitudes to information are to be simulated within a multi-agent environment with agents ho...




Publication date: 1988